Session C-3

Learning and Prediction

May 4 Wed, 10:00 AM — 11:30 AM EDT

Boosting Internet Card Cellular Business via User Portraits: A Case of Churn Prediction

Fan Wu and Ju Ren (Tsinghua University, China); Feng Lyu (Central South University, China); Peng Yang (Huazhong University of Science and Technology, China); Yongmin Zhang and Deyu Zhang (Central South University, China); Yaoxue Zhang (Tsinghua University, China)

The Internet card (IC) is an emerging business model that is penetrating the market rapidly and holds the potential to foster a large business market. However, the understanding of IC user portraits, which is the building block for boosting the IC business, remains insufficient. In this paper, we take the lead in bridging this gap by studying a large-scale dataset collected from a provincial network operator in China, containing about 4 million IC users and 22 million traditional card (TC) users.
In particular, we first conduct a systematic analysis of the usage data by investigating the differences between the two types of users, examining the impact of user properties, and characterizing spatio-temporal networking patterns. We then shed light on one specific business case, churn prediction, by devising an IC user Churn Prediction model, named ICCP, which consists of a feature extraction component and a learning architecture. ICCP extracts both static portrait features and temporal sequential features: a principal component analysis (PCA) block and embedding/transformer layers learn the respective information of the two feature types, which is then fed into a multilayer perceptron (MLP) classifier for prediction. Extensive experiments corroborate the efficacy of ICCP.
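The abstract describes ICCP's architecture but not its implementation. As a rough illustration, the following is a minimal PyTorch sketch of the pipeline as described: a PCA-style projection over static portrait features, an embedding plus transformer encoder over temporal sequences, and an MLP classifier over their concatenation. All names, dimensions, and the mean-pooling step are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class ICCPSketch(nn.Module):
    # Hypothetical reconstruction of the ICCP pipeline from the abstract.
    def __init__(self, n_static=64, pca_dim=16, n_tokens=100, emb_dim=32):
        super().__init__()
        # "PCA block" approximated as a fixed (non-trainable) projection;
        # in practice the matrix would come from fitting PCA offline.
        self.pca = nn.Linear(n_static, pca_dim, bias=False)
        self.pca.weight.requires_grad_(False)
        # Embedding + transformer encoder for temporal sequential features.
        self.embed = nn.Embedding(n_tokens, emb_dim)
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # MLP classifier over the concatenated representations.
        self.mlp = nn.Sequential(
            nn.Linear(pca_dim + emb_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, static_x, seq_tokens):
        s = self.pca(static_x)                      # (B, pca_dim)
        t = self.encoder(self.embed(seq_tokens))    # (B, L, emb_dim)
        t = t.mean(dim=1)                           # pool over time
        return self.mlp(torch.cat([s, t], dim=-1))  # churn logits

model = ICCPSketch()
logits = model(torch.randn(8, 64), torch.randint(0, 100, (8, 30)))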

Lumos: Towards Better Video Streaming QoE through Accurate Throughput Prediction

Gerui Lv, Qinghua Wu, Weiran Wang and Zhenyu Li (Institute of Computing Technology, Chinese Academy of Sciences, China); Gaogang Xie (CNIC Chinese Academy of Sciences & University of Chinese Academy of Sciences, China)

Adaptive bitrate (ABR) algorithms play an essential role in optimizing the QoE of video streaming by dynamically selecting chunk bitrates based on network capacity. To estimate network capacity, most ABR algorithms use throughput prediction, while some recent work advocates delivery-time prediction. In this paper, we build an automated video streaming measurement platform and collect an extensive dataset in various network environments, containing more than 400 hours of playing time. Based on this dataset, we find that most previous works fail to predict throughput accurately because they ignore the strong correlation between chunk size and throughput. This correlation is deeply affected by the state of the client player, the chunk index, and the signal strength of the mobile client, all of which should be considered for more accurate throughput prediction. Moreover, we show that throughput is a better prediction target than delivery time in terms of prediction error, owing to the long-tail distribution of delivery time. We then propose a decision-tree-based throughput predictor, named Lumos, which acts as a plug-in for ABR algorithms. Extensive experiments in the wild demonstrate that Lumos achieves high prediction accuracy and improves the QoE of ABR algorithms when integrated into them.
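The paper's exact feature encoding and tree configuration are not given in the abstract; the sketch below shows, under assumed features and synthetic data, how a decision-tree throughput predictor in the spirit of Lumos could be trained with scikit-learn on the factors the abstract names (chunk size, player state, chunk index, signal strength) and queried as an ABR plug-in.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical training set: one row per delivered chunk.
# Columns: chunk_size_MB, player_state (0=startup, 1=steady), chunk_index, signal_dBm
X = np.column_stack([
    rng.uniform(0.5, 8.0, 1000),
    rng.integers(0, 2, 1000),
    rng.integers(0, 200, 1000),
    rng.uniform(-110, -60, 1000),
])
y = rng.uniform(1.0, 40.0, 1000)  # throughput labels (Mbps); real ones come from traces

predictor = DecisionTreeRegressor(max_depth=8).fit(X, y)

# Plug-in use inside an ABR loop: predict throughput for the next chunk,
# then let the ABR algorithm pick the highest bitrate that fits the estimate.
next_chunk = np.array([[4.0, 1, 57, -72.0]])
est_mbps = predictor.predict(next_chunk)[0]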

Poisoning Attacks on Deep Learning based Wireless Traffic Prediction

Tianhang Zheng and Baochun Li (University of Toronto, Canada)

Big client data and deep learning bring a new level of accuracy to wireless traffic prediction in non-adversarial environments. However, in a malicious-client environment, the training-stage vulnerability of deep learning (DL) based wireless traffic prediction remains under-explored. In this paper, we conduct the first systematic study of training-stage poisoning attacks against DL-based wireless traffic prediction in both centralized and distributed training scenarios. In contrast to previous poisoning attacks in computer vision, we consider a more practical threat model, specific to wireless traffic prediction, to design these attacks. In particular, we assume that potential malicious clients do not collude and have no additional knowledge about the other clients' data. We propose a perturbation masking strategy and a tuning-and-scaling method to fit data and model poisoning attacks into this practical threat model. We also explore potential defenses against these poisoning attacks and propose two defense methods. Through extensive evaluations, we show that the mean square error (MSE) can be increased by over 50% and up to 10^8 times with our proposed poisoning attacks. We also demonstrate the effectiveness of our data sanitization approach and anomaly detection method against our poisoning attacks in centralized and distributed scenarios.
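The abstract names a tuning-and-scaling method without details. As a hedged illustration of the general attack family (not the paper's exact algorithm), the NumPy sketch below shows how a single non-colluding client could tune its model update toward a loss-increasing direction and scale it to survive plain federated averaging.

import numpy as np

def fedavg(updates):
    # Server-side aggregation: plain average of client model updates.
    return np.mean(updates, axis=0)

def poisoned_update(benign_update, n_clients, scale_cap=10.0):
    # Tune: flip the benign direction so training loss of the traffic
    # predictor increases rather than decreases.
    tuned = -benign_update
    # Scale: amplify so the single malicious update dominates the average,
    # capped so it can stay plausible under simple norm checks.
    scale = min(float(n_clients), scale_cap)
    return scale * tuned

rng = np.random.default_rng(1)
benign = [rng.normal(0, 0.01, 100) for _ in range(9)]
attacker = poisoned_update(np.mean(benign, axis=0), n_clients=10)
global_update = fedavg(benign + [attacker])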

PreGAN: Preemptive Migration Prediction Network for Proactive Fault-Tolerant Edge Computing

Shreshth Tuli and Giuliano Casale (Imperial College London, United Kingdom (Great Britain)); Nicholas Jennings (Imperial College, United Kingdom (Great Britain))

Building a fault-tolerant edge system that can quickly react to node overloads or failures is challenging due to the unreliability of edge devices and the strict service deadlines of modern applications. Moreover, unnecessary task migrations can stress the system network, giving rise to the need for a smart and parsimonious failure recovery scheme. Prior approaches often fail to adapt to highly volatile workloads or to accurately detect and diagnose faults for optimal remediation. There is thus a need for a robust and proactive fault-tolerance mechanism to meet service level objectives. In this work, we propose PreGAN, a composite AI model using a Generative Adversarial Network (GAN) to predict preemptive migration decisions for proactive fault-tolerance in containerized edge deployments. PreGAN uses co-simulations in tandem with a GAN to learn a few-shot anomaly classifier and proactively predict migration decisions for reliable computing. Extensive experiments on a Raspberry Pi based edge environment show that PreGAN outperforms state-of-the-art baseline methods in fault detection, diagnosis, and classification, thus achieving high quality of service: it delivers 5.1% more accurate fault detection, higher diagnosis scores, and 23.8% lower overheads than the best of the considered baselines.
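How PreGAN couples the GAN with co-simulations is specified in the paper, not here; the following PyTorch fragment is only a speculative sketch of one adversarial step in that spirit: a generator proposes per-host migration decisions from utilization metrics, and a discriminator, standing in for co-simulation feedback, scores them. All shapes and names are assumptions.

import torch
import torch.nn as nn

n_hosts, n_feats = 8, 4  # illustrative edge cluster size and metrics per host

# Generator: host metrics -> soft migration decision per host.
G = nn.Sequential(nn.Linear(n_hosts * n_feats, 64), nn.ReLU(),
                  nn.Linear(64, n_hosts), nn.Sigmoid())
# Discriminator: (metrics, decision) -> plausibility score, standing in for
# the co-simulator's judgement of whether a decision avoids the fault.
D = nn.Sequential(nn.Linear(n_hosts * n_feats + n_hosts, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

metrics = torch.randn(32, n_hosts * n_feats)  # dummy utilization traces
good = torch.rand(32, n_hosts)                # "co-simulated" good decisions

# One adversarial step: D learns to prefer co-simulated decisions,
# G learns to produce decisions D cannot tell apart from them.
fake = G(metrics)
loss_d = bce(D(torch.cat([metrics, good], 1)), torch.ones(32, 1)) + \
         bce(D(torch.cat([metrics, fake.detach()], 1)), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
loss_g = bce(D(torch.cat([metrics, fake], 1)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()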

Session Chair

Ruozhou Yu (North Carolina State University)

Session C-6

Learning at the Edge

May 4 Wed, 4:30 PM — 6:00 PM EDT

Decentralized Task Offloading in Edge Computing: A Multi-User Multi-Armed Bandit Approach

Xiong Wang (Huazhong University of Science and Technology, China); Jiancheng Ye (Huawei, Hong Kong); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

Mobile edge computing allows users to offload computation tasks to edge servers to meet their stringent delay requirements. Previous works mainly explore task offloading when system-side information is given (e.g., server processing speed, cellular data rate), or centralized offloading under system uncertainty, but both generally fall short in handling task placement for many coexisting users in a dynamic and uncertain environment. In this paper, we develop a multi-user offloading framework that handles unknown yet stochastic system-side information to enable decentralized, user-initiated service placement. Specifically, we formulate dynamic task placement as an online multi-user multi-armed bandit process and propose a decentralized epoch-based offloading (DEBO) algorithm to optimize user rewards subject to network delay. We show that DEBO can deduce the optimal user-server assignment, thereby achieving close-to-optimal service performance and a tight O(log T) offloading regret. Moreover, we generalize DEBO to various common scenarios, such as an unknown reward gap, clients dynamically entering or leaving, and fair reward distribution, and further explore the case where users' offloaded tasks require heterogeneous computing resources, accomplishing a sub-linear regret for each of these instances. Evaluations based on real measurements corroborate the superiority of our offloading schemes over state-of-the-art approaches in optimizing delay-sensitive rewards.
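DEBO's epoch structure and collision handling are what distinguish it; the minimal sketch below shows only the underlying decentralized bandit idea, in which each user independently runs a UCB rule over edge servers using its own observed rewards. Names and the reward model are assumptions.

import math
import random

class UCBOffloader:
    # One decentralized user: treats each edge server as a bandit arm.
    def __init__(self, n_servers):
        self.counts = [0] * n_servers
        self.means = [0.0] * n_servers
        self.t = 0

    def choose(self):
        self.t += 1
        for k, c in enumerate(self.counts):  # try every server once first
            if c == 0:
                return k
        return max(range(len(self.counts)),
                   key=lambda k: self.means[k] +
                   math.sqrt(2 * math.log(self.t) / self.counts[k]))

    def update(self, k, reward):
        self.counts[k] += 1
        self.means[k] += (reward - self.means[k]) / self.counts[k]

# Toy run: reward is a noisy, delay-sensitive score of the chosen server.
true_quality = [0.3, 0.6, 0.8]
user = UCBOffloader(len(true_quality))
for _ in range(1000):
    k = user.choose()
    user.update(k, random.gauss(true_quality[k], 0.1))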

Deep Learning on Mobile Devices Through Neural Processing Units and Edge Computing

Tianxiang Tan and Guohong Cao (The Pennsylvania State University, USA)

Deep Neural Networks (DNNs) are increasingly adopted for video analytics on mobile devices. To reduce the delay of running DNNs, many mobile devices are equipped with Neural Processing Units (NPUs). However, due to the resource limitations of NPUs, these DNNs have to be compressed to increase processing speed, at the cost of accuracy. To address the low-accuracy problem, we propose a Confidence Based Offloading (CBO) framework for deep learning video analytics. The major challenge is to determine when to return the NPU classification result, based on the confidence level of running the DNN, and when to offload the video frames to the server for further processing to increase accuracy. We first identify the problem of using existing confidence scores to make offloading decisions and propose confidence score calibration techniques to improve performance. We then formulate the CBO problem, where the goal is to maximize accuracy under a time constraint, and propose an adaptive solution that determines which frames to offload, and at what resolution, based on the confidence score and the network condition. Through real implementations and extensive evaluations, we demonstrate that the proposed solution significantly outperforms other approaches.
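As a toy illustration of the decision rule the abstract describes (not the paper's actual calibration or optimization), the sketch below accepts the NPU result when a temperature-calibrated confidence clears a threshold and otherwise offloads the frame; the temperature, threshold, and mocked server call are assumptions.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def calibrated_confidence(logits, temperature=2.0):
    # Temperature scaling: a common calibration fix for over-confident
    # compressed models (a stand-in for the paper's calibration techniques).
    return softmax(logits / temperature).max()

def classify_frame(npu_logits, offload_fn, threshold=0.7):
    conf = calibrated_confidence(npu_logits)
    if conf >= threshold:
        return int(np.argmax(npu_logits))  # accept fast on-device result
    return offload_fn()                    # low confidence: send frame to server

# Toy usage: the server-side model is mocked as a function.
result = classify_frame(np.array([2.1, 1.9, 0.2]), offload_fn=lambda: 0)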

Learning-based Multi-Drone Network Edge Orchestration for Video Analytics

Chengyi Qu, Rounak Singh, Alicia Esquivel Morel and Prasad Calyam (University of Missouri-Columbia, USA)

Unmanned aerial vehicles (a.k.a. drones) with high-resolution video cameras are useful for applications such as public safety and smart farming. However, edge network misconfigurations in drone video analytics applications can result in degraded video quality and inefficient resource utilization. In this paper, we present a novel scheme for offline/online learning-based network edge orchestration that achieves pertinent selection of both network protocols and video properties in multi-drone video analytics. Our approach uses both supervised and unsupervised machine learning algorithms to drive the selection of network protocols and video properties in the drones' pre-takeoff (i.e., offline) stage. In addition, it optimizes drone trajectories during flight through an online reinforcement learning-based multi-agent deep Q-network algorithm. Evaluation results show how our offline orchestration suitably chooses network protocols (i.e., among TCP/HTTP, UDP/RTP, and QUIC). We also demonstrate how our unsupervised learning approach outperforms existing learning approaches and achieves efficient offloading, while improving network performance (i.e., throughput and round-trip time) by at least 25% with satisfactory video quality. Lastly, we show via trace-based simulations how our online orchestration achieves 91% of the oracle baseline's network throughput performance with comparable video quality.
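The online stage uses a multi-agent deep Q-network; the fragment below is a single-agent PyTorch sketch of the epsilon-greedy action selection and temporal-difference update such an orchestrator might run per drone. The state layout, action space, and reward are illustrative assumptions.

import random
import torch
import torch.nn as nn

n_state, n_actions = 6, 5  # e.g., link stats + position; 5 trajectory moves (assumed)
q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

def act(state):
    if random.random() < eps:  # explore a random trajectory move
        return random.randrange(n_actions)
    with torch.no_grad():      # exploit the current Q estimates
        return int(q_net(state).argmax())

def td_step(s, a, r, s_next):
    # One temporal-difference update toward r + gamma * max_a' Q(s', a').
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()
    loss = (q_net(s)[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

s = torch.randn(n_state)
a = act(s)
td_step(s, a, r=1.0, s_next=torch.randn(n_state))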

Online Model Updating with Analog Aggregation in Wireless Edge Learning

Juncheng Wang (University of Toronto, Canada); Min Dong (Ontario Tech University, Canada); Ben Liang (University of Toronto, Canada); Gary Boudreau (Ericsson, Canada); Hatem Abou-Zeid (University of Calgary, Canada)

We consider federated learning in a wireless edge network, where multiple power-limited mobile devices collaboratively train a global model, using their local data with the assistance of an edge server. Exploiting over-the-air computation, the edge server updates the global model via analog aggregation of the local models over noisy wireless fading channels. Unlike existing works that separately optimize computation and communication at each step of the learning algorithm, in this work we jointly optimize the training of the global model and the analog aggregation of local models over time. Our objective is to minimize the accumulated training loss at the edge server, subject to individual long-term transmit power constraints at the mobile devices. We propose an efficient algorithm, termed Online Model Updating with Analog Aggregation (OMUAA), to adaptively update the local and global models based on the time-varying communication environment. The trained model of OMUAA is channel- and power-aware, and it is in closed form with low computational complexity. We derive performance bounds on both computation and communication performance metrics. Simulation results based on real-world image classification datasets and typical Long-Term Evolution network settings demonstrate a substantial performance gain of OMUAA over the best known alternatives.
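As a rough illustration of the over-the-air analog aggregation that OMUAA builds on (not its actual closed-form update), the NumPy sketch below pre-scales each local model by its channel inverse under a power cap, superposes the transmissions over a fading channel with receiver noise, and rescales at the server. The truncation rule and all constants are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_dev, dim = 10, 50
models = rng.normal(0, 1, (n_dev, dim))  # local model updates
h = rng.rayleigh(1.0, n_dev)             # fading channel gains
p_max = 4.0                              # per-device transmit power cap

# Channel-inversion pre-scaling with truncation: devices in deep fade
# that would exceed the power budget stay silent this round.
alpha = 1.0 / h
active = alpha**2 * (models**2).mean(1) <= p_max
tx = (alpha[:, None] * models) * active[:, None]

# Superposition over the air plus receiver noise, then server rescaling.
rx = (h[:, None] * tx).sum(0) + rng.normal(0, 0.1, dim)
global_update = rx / max(active.sum(), 1)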

Session Chair

Stephen Lee (University of Pittsburgh)
